16 research outputs found

    Greedy adaptive algorithms for sparse representations

    A vector or matrix is said to be sparse if the number of non-zero elements is significantly smaller than the number of zero elements. In estimation theory, the vectors of model parameters may be known in advance to have a sparse structure, and solving an estimation problem taking this constraint into account can substantially improve the accuracy of the solution. The theory of sparse models has advanced significantly in recent years, providing many results that can guarantee certain properties of the sparse solutions. These performance guarantees can be very powerful in applications and have no counterpart in the estimation theory for non-sparse models. Model sparsity is an inherent characteristic of many applications (image compression, wireless channel estimation, direction of arrival estimation) in signal processing and related areas. Due to continuous technological advances that allow faster numerical computations, optimization problems too complex to be solved in the past can now provide better solutions by also considering sparsity constraints. However, an exhaustive search for sparse solutions generally requires a combinatorial search for the correct support, a very limiting factor due to the huge numerical complexity. This motivated a growing interest in developing batch sparsity-aware algorithms over the past twenty years. More recently, the main goal of the continuing research on sparsity has been the quest for faster, less computationally intensive, adaptive methods able to recursively update the solution. In this thesis we present several such algorithms. They are greedy in nature and minimize the least squares criterion under the constraint that the solution is sparse. Similarly to other greedy sparse methods, two main steps are performed once new data are available: update the sparse support by changing the positions that contribute to the solution, and compute the coefficients that minimize the least squares criterion restricted to the current support. Two classes of adaptive algorithms were proposed. The first is derived from the batch matching pursuit algorithm. It uses a coordinate descent approach to update the solution, each coordinate being selected by a criterion similar to the one used by matching pursuit. We devised two algorithms that use a cyclic update strategy to improve the solution at each time instant. Since the solution support and coefficient values are assumed to vary slowly, a faster and better performing approach is later proposed by spreading the coordinate descent update in time. It was also adapted to work in a distributed setup in which different nodes communicate with their neighbors to improve their local solution towards a global optimum. The second direction can be linked to batch orthogonal least squares. The algorithms maintain a partial QR decomposition with pivoting and require a permutation-based support selection strategy to ensure low complexity while allowing the tracking of slow variations in the support. Two versions of the algorithm were proposed; they allow past data to be forgotten by using an exponential or a sliding window, respectively. The former was modified to improve the solution in a structured sparsity case, when the solution is group sparse. We also propose mechanisms for estimating the sparsity level online. They are based on information theoretic criteria, namely the predictive least squares and the Bayesian information criterion.
    The main contributions are the development of the adaptive greedy algorithms and the use of the information theoretic criteria that enable the algorithms to behave robustly. The algorithms have good performance, require limited prior information and are computationally efficient. Generally, the configuration parameters, if any, can easily be chosen as a tradeoff between the stationary error and the convergence speed.
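
    To make the two-step structure above concrete, the following Python sketch alternates a matching-pursuit-style support update with a least squares refit on the current support. It is a hypothetical, batch simplification (the dictionary A, data y and sparsity level k are assumed inputs); the thesis algorithms are adaptive, reuse past computations such as partial QR factors, and estimate the sparsity level online.

        import numpy as np

        def greedy_sparse_ls(A, y, k, n_sweeps=10):
            """Greedy sparse least squares sketch (hypothetical illustration).

            Repeats the two steps described in the abstract: pick the coordinate
            most correlated with the residual (matching-pursuit-style support
            update, replacing the weakest position once k columns are active),
            then refit the coefficients by least squares on the current support.
            """
            m, n = A.shape
            x = np.zeros(n)
            support = []
            residual = y.copy()
            for _ in range(n_sweeps):
                # Support update: coordinate most correlated with the residual.
                j = int(np.argmax(np.abs(A.T @ residual)))
                if j not in support:
                    if len(support) == k:
                        # Drop the weakest active coordinate to keep |support| = k.
                        weakest = support[int(np.argmin(np.abs(x[support])))]
                        support.remove(weakest)
                        x[weakest] = 0.0
                    support.append(j)
                # Coefficient update: least squares restricted to the support.
                coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                x[:] = 0.0
                x[support] = coeffs
                residual = y - A @ x
            return x, sorted(support)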

    A randomised primal-dual algorithm for distributed radio-interferometric imaging

    Next generation radio telescopes, like the Square Kilometre Array, will acquire an unprecedented amount of data for radio astronomy. The development of fast, parallelisable or distributed algorithms for handling such large-scale data sets is of prime importance. Motivated by this, we investigate herein a convex optimisation algorithmic structure, based on primal-dual forward-backward iterations, for solving the radio interferometric imaging problem. It can encompass any convex prior of interest. It allows for the distributed processing of the measured data and introduces further flexibility by employing a probabilistic approach for the selection of the data blocks used at a given iteration. We study the reconstruction performance with respect to the data distribution and we propose the use of nonuniform probabilities for the randomised updates. Our simulations show the feasibility of the randomisation given a limited computing infrastructure as well as important computational advantages when compared to state-of-the-art algorithmic structures. Comment: 5 pages, 3 figures, Proceedings of the European Signal Processing Conference (EUSIPCO) 2016; a related journal publication is available at https://arxiv.org/abs/1601.0402
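
    As a rough illustration of the randomised block updates, the sketch below runs a simplified primal-dual forward-backward loop in Python. It is a hypothetical reduction of the algorithmic structure: a quadratic data-fidelity term per block and an l1 prior stand in for the paper's constraints and generic convex priors, and the names (Phi_blocks, y_blocks, probs, tau, sigma, lam) are illustrative.

        import numpy as np

        def soft_threshold(v, t):
            """Proximity operator of t * ||.||_1 (stand-in sparsity prior)."""
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def randomised_primal_dual(Phi_blocks, y_blocks, probs,
                                   n_iter=300, tau=0.1, sigma=0.1, lam=1e-2):
            """Randomised primal-dual forward-backward sketch.

            Each data block b owns a measurement operator Phi_blocks[b] and data
            y_blocks[b]; its dual variable is refreshed only with probability
            probs[b], while the primal update always uses the latest duals.
            """
            n = Phi_blocks[0].shape[1]
            x = np.zeros(n)
            duals = [np.zeros(Phi.shape[0]) for Phi in Phi_blocks]
            for _ in range(n_iter):
                # Randomised dual (data-fidelity) updates, one per data block.
                for b, (Phi, yb) in enumerate(zip(Phi_blocks, y_blocks)):
                    if np.random.rand() < probs[b]:
                        v = duals[b] + sigma * (Phi @ x)
                        duals[b] = (v - sigma * yb) / (1.0 + sigma)  # prox of the conjugate
                # Primal forward-backward step with the sparsity prior.
                grad = sum(Phi.T @ d for Phi, d in zip(Phi_blocks, duals))
                x = soft_threshold(x - tau * grad, tau * lam)
            return x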

    An accelerated splitting algorithm for radio-interferometric imaging: when natural and uniform weighting meet

    Next generation radio-interferometers, like the Square Kilometre Array, will acquire tremendous amounts of data with the goal of improving the size and sensitivity of the reconstructed images by orders of magnitude. The efficient processing of large-scale data sets is of great importance. We propose an acceleration strategy for a recently proposed primal-dual distributed algorithm. A preconditioning approach can incorporate into the algorithmic structure both the sampling density of the measured visibilities and the noise statistics. Using the sampling density information greatly accelerates the convergence speed, especially for highly non-uniform sampling patterns, while relying on the correct noise statistics optimises the sensitivity of the reconstruction. In connection to CLEAN, our approach can be seen as including in the same algorithmic structure both natural and uniform weighting, thereby simultaneously optimising both the resolution and the sensitivity. The method relies on a new non-Euclidean proximity operator for the data fidelity term, that generalises the projection onto the ℓ2 ball where the noise lives for naturally weighted data, to the projection onto a generalised ellipsoid incorporating sampling density information through uniform weighting. Importantly, this non-Euclidean modification is only an acceleration strategy to solve the convex imaging problem with data fidelity dictated only by noise statistics. We showcase through simulations with realistic sampling patterns the acceleration obtained using the preconditioning. We also investigate the algorithm performance for the reconstruction of the 3C129 radio galaxy from real visibilities and compare with multi-scale CLEAN, showing better sensitivity and resolution. Our MATLAB code is available online on GitHub.
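
    The non-Euclidean proximity operator can be illustrated with a small Python sketch that contrasts the usual Euclidean projection onto an ℓ2 ball (natural weighting) with a projection onto a diagonally weighted ellipsoid computed in the metric induced by those weights, where a change of variables restores a closed form. This is a simplified, hypothetical version: the weight vector w standing in for the density-based (uniform) weighting is assumed, and the paper embeds its own operator within the preconditioned primal-dual structure.

        import numpy as np

        def project_l2_ball(r, y, eps):
            """Euclidean projection onto {z : ||z - y||_2 <= eps} (natural weighting)."""
            d = r - y
            nrm = np.linalg.norm(d)
            return r.copy() if nrm <= eps else y + (eps / nrm) * d

        def project_ellipsoid_weighted_metric(r, y, w, eps):
            """Projection onto the ellipsoid {z : ||sqrt(w) * (z - y)||_2 <= eps},
            computed in the metric induced by the diagonal weights w.

            With the change of variables u = sqrt(w) * z, the ellipsoid becomes
            an l2 ball, so the projection reduces to the closed-form ball
            projection above.
            """
            sw = np.sqrt(w)
            u = project_l2_ball(sw * r, sw * y, eps)
            return u / sw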

    Cygnus A super-resolved via convex optimisation from VLA data

    We leverage the Sparsity Averaging Reweighted Analysis (SARA) approach for interferometric imaging, which is based on convex optimisation, for the super-resolution of Cyg A from observations at the frequencies 8.422 GHz and 6.678 GHz with the Karl G. Jansky Very Large Array (VLA). The associated average sparsity and positivity priors enable image reconstruction beyond instrumental resolution. An adaptive Preconditioned Primal-Dual algorithmic structure is developed for imaging in the presence of unknown noise levels and calibration errors. We demonstrate the superior performance of the algorithm with respect to the conventional CLEAN-based methods, reflected in super-resolved images with high fidelity. The high resolution features of the recovered images are validated by referring to maps of Cyg A at higher frequencies, more precisely 17.324 GHz and 14.252 GHz. We also confirm the recent discovery of a radio transient in Cyg A, revealed in the recovered images of the investigated data sets. Our MATLAB code is available online on GitHub. Comment: 14 pages, 7 figures (3/7 animated figures), accepted for publication in MNRAS

    Robust sparse image reconstruction of radio interferometric observations with purify

    Next-generation radio interferometers, such as the Square Kilometre Array (SKA), will revolutionise our understanding of the universe through their unprecedented sensitivity and resolution. However, to realise these goals, significant challenges in image and data processing need to be overcome. The standard methods in radio interferometry for reconstructing images, such as CLEAN, have served the community well over the last few decades and have survived largely because they are pragmatic. However, they produce reconstructed interferometric images that are limited in quality and scalability for big data. In this work we apply and evaluate alternative interferometric reconstruction methods that make use of state-of-the-art sparse image reconstruction algorithms motivated by compressive sensing, which have been implemented in the PURIFY software package. In particular, we implement and apply the proximal alternating direction method of multipliers (P-ADMM) algorithm presented in a recent article. First, we assess the impact of the interpolation kernel used to perform gridding and degridding on sparse image reconstruction. We find that the Kaiser-Bessel interpolation kernel performs as well as prolate spheroidal wave functions, while providing a computational saving and an analytic form. Second, we apply PURIFY to real interferometric observations from the Very Large Array (VLA) and the Australia Telescope Compact Array (ATCA) and find that images recovered by PURIFY are of higher quality than those recovered by CLEAN. Third, we discuss how PURIFY reconstructions exhibit additional advantages over those recovered by CLEAN. The latest version of PURIFY, with the developments presented in this work, is made publicly available. Comment: 22 pages, 10 figures, PURIFY code available at http://basp-group.github.io/purif
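
    The analytic form that makes the Kaiser-Bessel kernel attractive for gridding and degridding is short enough to sketch. The Python function below uses a generic textbook parametrisation with a common heuristic for the shape parameter; it is not necessarily the exact kernel or parameter choice implemented in PURIFY.

        import numpy as np
        from scipy.special import i0  # modified Bessel function of the first kind, order 0

        def kaiser_bessel(u, width=6, beta=None):
            """Analytic Kaiser-Bessel interpolation kernel (generic form).

            u     : offsets from the grid point, in grid cells
            width : kernel support W in grid cells (hypothetical default)
            beta  : shape parameter; beta ~ 2.34 * W is a common heuristic
                    for a twofold oversampled grid
            """
            if beta is None:
                beta = 2.34 * width
            u = np.atleast_1d(np.asarray(u, dtype=float))
            k = np.zeros_like(u)
            inside = np.abs(u) <= width / 2.0
            arg = np.sqrt(1.0 - (2.0 * u[inside] / width) ** 2)
            k[inside] = i0(beta * arg) / i0(beta)
            return k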

    Prioritization of urban green infrastructures for sustainable urban planning in Ploiesti, Romania

    Urban green infrastructures are increasingly used as instruments for achieving sustainable urban planning due to their multifunctionality, reflected in numerous economic, social and environmental benefits. Selecting the most appropriate type of urban green infrastructure to be developed in a given city is often an important challenge for planners. In our analysis, we developed a model for a multi-criteria evaluation of the components of urban green infrastructures using structural, functional, administrative and economic criteria. We used as a case study the city of Ploiesti, an industrial Romanian city focused on oil processing. Ploiesti is one of the main engines of the Romanian economy, with a tradition of over 100 years of oil industry, and is characterized by a significant expansion of built-up areas (especially industrial and technological sites) on the outskirts of the city and a decrease in urban green area per capita. Policies and strategies to increase the density of urban green infrastructure and to sustainably manage the existing one represent a challenge for local authorities and other local actors and stakeholders, as the balance between economic development and the city's livability has to generate a proper quality of life for its inhabitants. Our results can lead to more efficient urban planning and to the use of correct and appropriate urban green infrastructure elements in improving the quality of life and the environment. The analysis can be used for the sustainable planning of urban green infrastructures in other cities lacking a proper amount of green areas.

    Cellular and Molecular Homeostatic Microenvironmental imbalances in Osteoarthritis and Rheumatoid Arthritis

    Human movement is a complex and multifactorial process due to the interaction between the body and the environment. Movement is the result of the activities of all the structures that make up a joint (i.e., ligaments, tendons, muscles, fascicles, blood vessels, nerves, etc.) and of the control actions of the nervous system on them. Therefore, many pathological conditions can affect the Neuro-Myo-Arthro-Kinetic System (NMAK). Osteoarthritis (OA) is the degenerative form of arthritis, with a high incidence and a prolonged course, that affects articular and periarticular tissues such as the articular cartilage, subchondral bone, and synovium, as a degenerative consequence. In contrast, Rheumatoid Arthritis (RA) is an immune-mediated synovial disease caused by a complex interaction between genetic and environmental factors. This review aims to compare Osteoarthritis (OA) and Rheumatoid Arthritis (RA) in terms of pathogenesis and microenvironment and to determine the main changes in the joint microenvironment regarding immunological defense elements and bioenergetics, which can explain the pathological development and open new therapeutic opportunities.

    Translation of the Fugl-Meyer assessment into Romanian: Transcultural and semantic-linguistic adaptations and clinical validation

    Purpose: The Fugl-Meyer Assessment (FMA) scale, which is widely used and highly recommended, is an appropriate tool for evaluating poststroke sensorimotor and other possible somatic deficits. It is also well-suited for capturing a dynamic rehabilitation process. The aim of this study was to first translate the entire sensorimotor FMA scale into Romanian using the transcultural and semantic-linguistic adaptations of its official afferent protocols and to then validate it using the preliminary clinical evaluation of inter- and intra-rater reliability and relevant concurrent validity.
    Methods: Through three main steps, we completed a standardized procedure for translating FMA's official afferent evaluation protocols into Romanian and their transcultural and semantic-linguistic adaptation for both the upper and lower extremities. For relevant clinical validation, we evaluated 10 patients after a stroke two times: on days 1 and 2. All patients were evaluated simultaneously by two kinesi-physiotherapists (generically referred to as KFT1 and KFT2) over the course of 2 consecutive days, taking turns in the roles of examiner and observer, and vice versa (inter-rater). Two scores were therefore obtained and compared for the same patient: one afferent to the inter-rater assay, comparing the assessment outcomes obtained by the two kinesi-physiotherapists, and one afferent to the intra-rater assay, based on the evaluations of the same kinesi-physiotherapist on two consecutive days, using a rank-based method (Svensson) for statistical analysis. We also compared our final Romanian version of FMA's official protocols for concurrent validity (Spearman's rank correlation statistical method) to two widely available assessment instruments: the Barthel Index (BI) and the modified Rankin scale (mRS).
    Results: Svensson's method confirmed overall good inter- and intra-rater results for the main parts of the final Romanian version of FMA's evaluation protocols, regarding the percentage of agreement (≥80% on average) and disagreement: relative position [RP; values outside the interval (−0.1, 0.1) in only two of the 56 comparisons made], relative concentration [RC; values outside the interval (−0.1, 0.1) in only nine of the same 56 comparisons], and relative rank variation [RV; values outside the interval (0, 0.1) in only five of the 56 comparisons]. High correlation values were obtained between the final Romanian version of FMA's evaluation protocols and the BI: ρ = 0.9167 (p = 0.0002) for FMA–upper extremity (FMA-UE) total A-D (motor function) and ρ = 0.6319 (p = 0.0499) for FMA–lower extremity (FMA-LE) total E-F (motor function). Correlations with the mRS were ρ = −0.5937 (p = 0.0704), close to the significance limit, for FMA-UE total A-D (motor function) and ρ = −0.6615 (p = 0.0372) for FMA-LE total E-F (motor function).
    Conclusions: The final Romanian version of FMA's official evaluation protocols showed good preliminary reliability and validity, and it could thus be recommended for use and expected to help improve the standardization of this assessment scale for patients after a stroke in Romania. Furthermore, this endeavor could be added to similar international translation and cross-cultural adaptation efforts, thereby facilitating a more appropriate comparison of evaluations and outcomes in the management of stroke worldwide.
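
    The concurrent-validity figures quoted above come from Spearman's rank correlation. A minimal Python illustration of that computation is sketched below; the scores are invented for ten hypothetical patients and are not the study data.

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical FMA-UE motor totals and Barthel Index scores for 10 patients.
        fma_ue_total = np.array([52, 40, 61, 33, 58, 47, 29, 55, 44, 50])
        barthel_index = np.array([75, 55, 90, 45, 85, 70, 40, 80, 60, 72])

        rho, p_value = spearmanr(fma_ue_total, barthel_index)
        print(f"Spearman rho = {rho:.4f}, p = {p_value:.4f}")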
